Formalizing Deceptive Reasoning in Breaking Bad: Default Reasoning in a Doxastic Logic
Abstract
The rich expressivity provided by the cognitive event calculus (CEC) knowledge representation framework allows for reasoning over deeply nested beliefs, desires, intentions, and so on. I put CEC to the test by attempting to model the complex reasoning and deceptive planning used in an episode of the popular television show Breaking Bad. CEC is used to represent the knowledge used by reasoners coming up with plans like the ones devised by the fictional characters I describe. However, it becomes clear that a form of nonmonotonic reasoning is necessary—specifically so that an agent can reason about the nonmonotonic beliefs of another agent. I show how CEC can be augmented to have this ability, and then provide examples detailing how my proposed augmentation enables much of the reasoning used by agents such as the Breaking Bad characters. I close by discussing what sort of reasoning tool would be necessary to implement such nonmonotonic reasoning.

An old joke, said to be a favorite of Sigmund Freud, opens with two passengers, Trofim and Pavel, on a train leaving Moscow. Trofim begins by confronting Pavel, demanding to know where he is going.

Pavel: “To Pinsk.”
Trofim: “Liar! You say you are going to Pinsk in order to make me believe you are going to Minsk. But I know you are going to Pinsk!” (Cohen 2002)

Fictional stories can sometimes capture aspects of deception in the real world, especially between individuals who are skilled at reasoning over the beliefs of others (second-order beliefs), the beliefs of one party about the beliefs of another (third-order beliefs), and so on. For example, an agent a desiring to deceive agent b may need to take into account agent b’s counter-deception measures (where the latter measures may be directed back at agent a, as was suspected by poor Trofim). Such fictional stories may thus sometimes be a suitable source of test cases for frameworks specializing in the representation of, and reasoning over, complex doxastic statements. The cognitive event calculus (CEC) promises to be such a framework, given its ability to represent beliefs, knowledge, intentions, and desires over time (Arkoudas and Bringsjord 2009).

In this paper, I will attempt to model the reasoning used by agents in an episode of the television series Breaking Bad. Episode 13 of season 5, entitled To’hajiilee, is notably rich in deceptive behaviors between characters, being a point in the series’ overall story arc where the conflict between several consistently wily characters comes to a climax. One group (Jesse and Hank) devises a plan to lure, trap, and catch another character (Walt), and I try to answer two questions about their plan in this paper: First, what sort of reasoning and knowledge representation would be necessary to devise a plan such as the one created by Jesse and Hank? Second, is CEC sufficiently powerful to represent such knowledge and serve as a base framework for such reasoning? Section 1 will argue that even an analysis of how well CEC can model reasoning in a fictional story can be beneficial to the field of automated human-level reasoning, discussing related literature. I give an overview of CEC in Section 2, followed by a synopsis of the relevant portions of To’hajiilee’s plot (Section 3.1).
An analysis of the plan generation used by the characters¹ in Section 3.2 suggests the need for a form of nonmonotonic reasoning that requires, at a minimum, reasoning over second-order beliefs. I then spend some time explaining how this nonmonotonic reasoning can work in CEC. The paper wraps up with a discussion of implications for the future of deceptive and counter-deceptive AI (Section 5).

¹ Of course, the characters I discuss here are fictional. I really am talking about the work of the writers of the show, who are reasoning from the perspectives of the fictional characters. It will be more convenient in this paper to simply say it is the fictional characters doing the reasoning.

1 Why Bother Modeling Reasoning in Plots?

The cognition of deception is particularly interesting to model: Knowing when to deceive in social situations may make for robots that are better accepted socially (Wagner and Arkin 2009; Sharkey and Sharkey 2011). Deceptive machines may indeed be the inevitable consequence, or perhaps the explicit goal, of human-level AI (Castelfranchi 2000; Clark and Atkinson 2013).

Instances of deception in fiction are not difficult to find. Some variant of deceptive behavior seems to appear in any story involving characters holding beliefs, intentions, and desires about the beliefs of other characters, depending on the definition of deception one accepts. Although some stories are better than others at accurately portraying realistic behaviors, all were written at some point by imaginative human beings (with some exceptions; cf. Bringsjord and Ferrucci 1999). They therefore offer clues about the human ability to think deceptively and counter-deceptively; e.g., a plan of deception devised by a fictional character, at the very least, tells us what types of plans humans are capable of both comprehending (as the readers of a story do) and creatively generating (as the writers did). For researchers interested in understanding the expressivity of human-level thought, stories of deception are useful benchmarks.

2 An Overview of CEC

The cognitive event calculus (CEC) is a first-order modal logic for knowledge representation first introduced by Arkoudas and Bringsjord (2009) as a way to model Piaget’s false-belief task. A member of the cognitive calculi family of logics (Bringsjord et al. 2015), CEC contains operators for several mental states and events: Belief, Knowledge, Intention, Desire, Common knowledge, and Speech acts. Note that not all of these operators are introduced in Arkoudas and Bringsjord (2009); rather, much of the current version of CEC reflects subsequent developments, most of which were produced in parallel with work on the deontic cognitive event calculus (DCEC∗), an extension of CEC (Bringsjord et al. 2014).

CEC is loosely based on the event calculus (Kowalski and Sergot 1986), but departs from it and other similar logics in several important ways, two of which are especially relevant to this paper’s purposes:

• Although no formal semantics is fully defined for CEC, there is a preference for proof-theoretic (and the highly related argument-theoretic) semantics and a natural deduction (Jaśkowski 1934) style of inference. Although there are some cases where cognitively implausible techniques such as resolution may assist in the proof-finding process, the underlying inferences are rooted in a set of constantly refined inference rules.

• CEC rejects the use of logical operators and inference rules in contexts for which they were not designed. Standard deontic logic (SDL), for example, made the mistake of trying to define an obligation operator as a direct analog of the necessity operator from standard modal logic, with disastrous consequences (Chisholm 1963; McNamara 2014).
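As a toy illustration of the flavor of such rule-based inference over nested mental-state formulae, the sketch below encodes just the B and S operators (whose official readings are given in the next paragraph) and applies an analogue of the CEC rule that takes a speech act S(s, h, t, φ) to the hearer’s belief B(h, t, B(s, t, φ)). The sketch is entirely my own and is not the CEC authors’ implementation; the class names, string-based atoms, and integer time points are all invented for illustration.

    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Atom:
        """An atomic proposition, kept as an opaque string for this sketch."""
        name: str

    @dataclass(frozen=True)
    class Believes:
        """B(agent, time, phi): agent believes phi at time."""
        agent: str
        time: int
        phi: "Formula"

    @dataclass(frozen=True)
    class Says:
        """S(speaker, hearer, time, phi): speaker says phi to hearer at time."""
        speaker: str
        hearer: str
        time: int
        phi: "Formula"

    # A formula is an atom, a belief, or a speech act (conjunction, negation,
    # and the remaining operators are omitted in this sketch).
    Formula = Union[Atom, Believes, Says]

    def hearer_belief(speech_act: Says) -> Believes:
        """Toy analogue of the CEC rule taking S(s, h, t, phi) to
        B(h, t, B(s, t, phi)): after the utterance, the hearer believes
        that the speaker believes what was said."""
        return Believes(
            agent=speech_act.hearer,
            time=speech_act.time,
            phi=Believes(speech_act.speaker, speech_act.time, speech_act.phi),
        )

    # Pavel tells Trofim he is going to Pinsk (the time point is arbitrary).
    utterance = Says("pavel", "trofim", 1, Atom("goingTo(pavel, pinsk)"))
    print(hearer_belief(utterance))
    # Believes(agent='trofim', time=1,
    #          phi=Believes(agent='pavel', time=1,
    #                       phi=Atom(name='goingTo(pavel, pinsk)')))

A full reasoner would of course also need the remaining operators (knowledge, perception, intention, desire, common knowledge), the temporal event-calculus machinery, and the rest of the inference rules; the point here is only the shape of rule application over nested formulae.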
Most CEC formulae contain terms for at least one agent, a temporal unit, and a nested formula. For example, B(a, t, φ) is read “agent a believes φ at time t”. There are two exceptions to this form: C(t, φ) is read “all agents believe φ at time t”, and S(a, b, t, φ) says “at time t, agent a says φ to agent b”. The syntax used in this paper is pictured in Figure 1.

[Figure 1: The CEC Syntax Used in this Paper. The figure gives the sorts, the event-calculus function signatures (action, initially, holds, happens, clipped, initiates, terminates, prior, interval, payoff), the grammar of terms and formulae built from the P, K, B, C, S, D, and I operators, and the inference rules R1–R15.]

3 Can CEC Model Hank’s Deceptive Reasoning?

The cognitive event calculus provides a formalism for representing cognitively rich knowledge, but as this section will show, an augmentation is needed before the sort of reasoning used by the Breaking Bad characters can be faithfully modeled.
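To give a sense of the nesting such modeling requires, the doxastic core of the opening joke might be sketched in this notation roughly as follows (my own rendering: the agent names, time points, and the goingTo predicate are invented for illustration, and Trofim’s attribution of a deceptive intention is abstracted into a nested belief rather than an explicit use of the intention operator):

    S(pavel, trofim, t1, goingTo(pavel, pinsk))
    B(trofim, t2, B(pavel, t1, B(trofim, t3, goingTo(pavel, minsk))))
    K(trofim, t2, goingTo(pavel, pinsk))

That is, Pavel says “Pinsk”; Trofim believes that Pavel expects this utterance to leave Trofim believing “Minsk”; and yet Trofim takes himself to know “Pinsk” after all. The plan devised by Jesse and Hank, and Walt’s reasoning about it, presumably call for formulae of at least this doxastic depth.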